5 research outputs found

    EMU: Rapid prototyping of networking services

    Due to their performance and flexibility, FPGAs are an attractive platform for executing network functions. Making FPGA programming accessible to a large audience of developers, however, has long been a challenge. An appealing solution is to compile code from a general-purpose language to hardware using high-level synthesis. Unfortunately, current approaches to implementing rich network functionality are insufficient because they lack: (i) libraries with abstractions for common network operations and data structures, (ii) bindings to the underlying “substrate” on the FPGA, and (iii) debugging and profiling support. This paper describes Emu, a new standard library for an FPGA hardware compiler that enables developers to rapidly create and deploy network functionality. Emu allows for high-performance designs without being bound to particular packet processing paradigms. Furthermore, it supports running the same programs on CPUs, in Mininet, and on FPGAs, providing a better development environment that includes advanced debugging capabilities. We demonstrate that network functions implemented using Emu have only negligible resource and performance overheads compared with natively written hardware versions.
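    To illustrate the kind of general-purpose code a high-level-synthesis flow such as the one described above targets, here is a minimal, self-contained sketch of a packet-filtering network function. The class and method names (PacketFilter, accept) are hypothetical and do not reflect Emu's actual library API; Java is used purely as a stand-in for whichever general-purpose language the hardware compiler accepts, and the point is only that the same function can be tested on a CPU before being synthesised.

        // Illustrative only: names and structure are hypothetical, not Emu's API.
        import java.util.Arrays;

        public final class PacketFilter {
            // Accept Ethernet frames whose EtherType is IPv4 (0x0800), drop the rest.
            // In an HLS flow, a pure function like this could be compiled to hardware,
            // while the identical code is first run and debugged in software.
            static boolean accept(byte[] frame) {
                if (frame.length < 14) {          // shorter than an Ethernet header
                    return false;
                }
                int etherType = ((frame[12] & 0xFF) << 8) | (frame[13] & 0xFF);
                return etherType == 0x0800;       // IPv4
            }

            public static void main(String[] args) {
                byte[] ipv4Frame = new byte[14];
                ipv4Frame[12] = 0x08;             // EtherType 0x0800
                ipv4Frame[13] = 0x00;
                System.out.println(accept(ipv4Frame));                    // true
                System.out.println(accept(Arrays.copyOf(ipv4Frame, 10))); // false: truncated
            }
        }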

    FairCache: Introducing Fairness to ICN Caching - Technical Report

    Caching is a core principle of information-centric networking (ICN). Many novel algorithms have been proposed for ICN caching, several of which rely on collaborative principles, i.e. multiple caches interacting to decide what to store. Past work has assumed entirely altruistic nodes that will sacrifice their own performance for the global optimum. In this paper, we argue that this assumption is flawed. We address this problem by modelling the in-network caching problem as a Nash bargaining game. We develop optimal and heuristic caching solutions that explicitly consider both performance and fairness. We argue that only algorithms that are fair to all parties will encourage engagement and cooperation. Through extensive simulations, we show that our heuristic solution, FairCache, ensures that all collaborative caches achieve performance gains without undermining the performance of others.
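    For context, a minimal sketch of the standard Nash bargaining objective the abstract refers to, assuming per-cache utilities u_i (e.g., the hit-rate gain a cache obtains from collaborating) and disagreement points d_i (the performance each cache achieves when caching purely for itself); the paper's exact game formulation, constraints, and solution method may differ:

        \max_{\mathbf{x}} \; \prod_{i=1}^{N} \bigl( u_i(\mathbf{x}) - d_i \bigr)
        \quad \text{s.t.} \quad u_i(\mathbf{x}) \ge d_i \;\; \forall i

    Here \mathbf{x} denotes the joint content-placement decision across the N collaborating caches. Maximising the product of gains, rather than their sum, pushes the solution away from allocations in which any single cache is left with little or no benefit, which is the fairness property the abstract emphasises.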